[GLUTEN-11550][VL][UT] Enable Variant test suites#11726
Open
baibaichen wants to merge 1 commit into apache:main
Conversation
Force-pushed from 4a21089 to 794a1fe
Run Gluten Clickhouse CI on x86
Enable GlutenVariantEndToEndSuite, GlutenVariantShreddingSuite, and GlutenParquetVariantShreddingSuite for both spark40 and spark41.

Fixes:
1. VeloxValidatorApi: Detect variant shredded structs (produced by Spark's PushVariantIntoScan) by checking __VARIANT_METADATA_KEY metadata. Triggers fallback to Spark's native Parquet reader.
2. VeloxSparkPlanExecApi: Reject to_json with non-struct/map/array child types (e.g. VariantType), falling back to Spark since Velox does not support VariantType in to_json.
3. Spark41Shims: Detect Parquet variant logical type annotations and fall back to vanilla Spark when PARQUET_IGNORE_VARIANT_ANNOTATION is not set, since the Velox native reader does not check variant annotations.
4. pom.xml: Add -Dfile.encoding=UTF-8 to test JVM args. On JDK 17 and earlier, java.nio.charset.Charset.defaultCharset() is determined by the OS locale. On CI containers (centos-8/9) where LANG=C, the default charset is US-ASCII (ANSI_X3.4-1968). JDK 18+ changed this via JEP 400 (https://openjdk.org/jeps/400) to always default to UTF-8 regardless of locale. Spark's VariantUtil.getString() uses new String(byte[], offset, length) without specifying a charset, which decodes using the JVM default charset. With JDK 17 + LANG=C, UTF-8 encoded multi-byte characters (e.g. Chinese) are decoded as ASCII, producing garbled output.

Call chain:
VariantEndToEndSuite.check("\"你好,世界...\"")
-> to_json(parse_json(col("v")))
-> StructsToJsonEvaluator.evaluate()
-> JacksonGenerator.write(VariantVal)
-> VariantVal.toJson()
-> Variant.toJsonImpl()
-> VariantUtil.getString(byte[], pos)
-> new String(value, start, length) // no charset specified

https://github.com/apache/spark/blob/v4.0.1/common/variant/src/main/java/org/apache/spark/types/variant/VariantUtil.java#L508
https://github.com/apache/spark/blob/v4.1.0/common/variant/src/main/java/org/apache/spark/types/variant/VariantUtil.java#L509

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
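The charset pitfall behind fix 4 can be reproduced with a short, self-contained Java sketch (illustrative code, not part of Gluten or Spark). It contrasts the charset-less `String(byte[], int, int)` constructor that `VariantUtil.getString()` uses with the explicit-charset overload:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class CharsetDemo {
    public static void main(String[] args) {
        // UTF-8 bytes of a multi-byte string ("hello" in Chinese): 6 bytes.
        byte[] utf8 = "你好".getBytes(StandardCharsets.UTF_8);

        // What VariantUtil.getString() effectively does: decode with the JVM
        // default charset, which is locale-dependent before JEP 400 (JDK 18).
        // Under LANG=C on JDK 17 this is US-ASCII and the result is garbled.
        String platformDecoded = new String(utf8, 0, utf8.length);

        // The safe variant: pin the charset explicitly.
        String utf8Decoded = new String(utf8, 0, utf8.length, StandardCharsets.UTF_8);

        System.out.println("defaultCharset=" + Charset.defaultCharset());
        System.out.println("utf8RoundTrip=" + utf8Decoded.equals("你好"));
    }
}
```

Running the JVM with -Dfile.encoding=UTF-8 (the pom.xml fix) forces the default charset to UTF-8 so the charset-less path decodes correctly as well.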
Force-pushed from 794a1fe to c6759c0
Run Gluten Clickhouse CI on x86
What changes are proposed in this pull request?
Enable GlutenVariantEndToEndSuite, GlutenVariantShreddingSuite, and GlutenParquetVariantShreddingSuite for both spark40 and spark41.

Four fixes:

1. VeloxValidatorApi.scala: Detect variant shredded structs (produced by Spark's PushVariantIntoScan) by checking for __VARIANT_METADATA_KEY metadata on struct fields. Triggers fallback to Spark's native Parquet reader since Velox cannot read the variant shredding encoding.
2. VeloxSparkPlanExecApi.scala + ExpressionRestrictions.scala: Reject to_json with non-struct/map/array child types (e.g. VariantType), falling back to Spark since Velox does not support VariantType in to_json.
3. Spark41Shims.scala + ParquetMetadataUtils.scala + VeloxBackend.scala: Detect Parquet variant logical type annotations and fall back to vanilla Spark when PARQUET_IGNORE_VARIANT_ANNOTATION is not set, since the Velox native reader does not check variant annotations.
4. pom.xml: Add -Dfile.encoding=UTF-8 to test JVM args. On JDK 17 and earlier, Charset.defaultCharset() is determined by the OS locale. On CI containers (centos-8/9) where LANG=C, the default charset is US-ASCII. JDK 18+ changed this via JEP 400 to always default to UTF-8 regardless of locale. Spark's VariantUtil.getString() uses new String(byte[], offset, length) without specifying a charset, causing garbled output for multi-byte characters (e.g. Chinese) on JDK 17 + LANG=C.

How was this patch tested?
GlutenVariantEndToEndSuite 14 ✅, GlutenVariantShreddingSuite 8 ✅, GlutenParquetVariantShreddingSuite 7 ✅. The parse_json/to_json round-trip passes under LANG=C LC_ALL=C with -Dfile.encoding=UTF-8.

Was this patch authored or co-authored using generative AI tooling?
Generated-by: GitHub Copilot CLI
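The validation in fix 2 amounts to a type allowlist. A minimal sketch of that rule, using a hypothetical enum in place of Spark's DataType hierarchy (not the actual VeloxSparkPlanExecApi code):

```java
import java.util.Set;

// Simplified model of the to_json validation: only struct/map/array children
// are offloaded to Velox; anything else (notably VariantType) falls back to
// vanilla Spark, since Velox has no VariantType support in to_json.
public class ToJsonValidation {
    // Hypothetical stand-in for Spark's DataType hierarchy.
    enum DataType { STRUCT, MAP, ARRAY, VARIANT, STRING }

    static final Set<DataType> SUPPORTED =
        Set.of(DataType.STRUCT, DataType.MAP, DataType.ARRAY);

    // Returns true when the to_json child type can be offloaded to Velox.
    static boolean canOffloadToJson(DataType childType) {
        return SUPPORTED.contains(childType);
    }

    public static void main(String[] args) {
        System.out.println("struct=" + canOffloadToJson(DataType.STRUCT));
        System.out.println("variant=" + canOffloadToJson(DataType.VARIANT));
    }
}
```

When the check returns false, the operator is not offloaded and Spark's own evaluator runs, which is the standard Gluten fallback pattern.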
Related issue: #11550
Note: This PR subsumes #11723 (GlutenParquetVariantShreddingSuite).
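Fix 1's detection logic can be sketched as a metadata scan over struct fields. The field model below is a deliberately simplified stand-in (plain records and maps, not Spark's StructType), but the shape of the check is the same: any field carrying the variant-shredding marker forces a fallback.

```java
import java.util.List;
import java.util.Map;

// Simplified model of the VeloxValidatorApi check: structs produced by
// Spark's PushVariantIntoScan tag their fields with __VARIANT_METADATA_KEY
// metadata; Velox cannot read the shredded encoding, so seeing that key
// means the scan must fall back to Spark's native Parquet reader.
public class VariantShreddingCheck {
    // Hypothetical stand-in for a struct field: name plus metadata map.
    record Field(String name, Map<String, String> metadata) {}

    static final String VARIANT_METADATA_KEY = "__VARIANT_METADATA_KEY";

    // Returns true when any field carries the variant-shredding marker.
    static boolean isVariantShreddedStruct(List<Field> fields) {
        return fields.stream()
            .anyMatch(f -> f.metadata().containsKey(VARIANT_METADATA_KEY));
    }

    public static void main(String[] args) {
        List<Field> shredded =
            List.of(new Field("v", Map.of(VARIANT_METADATA_KEY, "1")));
        List<Field> plain = List.of(new Field("x", Map.of()));
        System.out.println("shredded=" + isVariantShreddedStruct(shredded));
        System.out.println("plain=" + isVariantShreddedStruct(plain));
    }
}
```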